As ChatGPT, the AI-powered chatbot developed by OpenAI, gains popularity among U.S. workers for handling routine tasks, uncertainty hangs over its future in the workplace. Businesses have begun questioning how far ChatGPT should be embraced while guarding against potential data security breaches and intellectual property leaks. The tension between rising adoption and the limits imposed by industry giants like Microsoft and Google has cast a spotlight on ChatGPT's likely trajectory in corporate settings.
With global companies at a crossroads, striking a balance between leveraging ChatGPT's capabilities and mitigating its risks has become a central concern. ChatGPT's generative AI powers a versatile chatbot that can hold conversations and answer a wide range of queries. Yet alongside its appeal, apprehensions persist about data exposure and the vulnerability of proprietary knowledge.
The months since its November debut have showcased ChatGPT's utility in everyday tasks such as drafting emails, summarizing documents, and performing preliminary research, and the corporate world has begun to take advantage. A recent Reuters/Ipsos survey, conducted online between July 11 and 17, found that 28% of more than 2,600 U.S. respondents have already integrated ChatGPT into their professional routines.
The survey also pointed to significant corporate pushback, however: 10% of respondents said their employers strictly ban external AI tools, while roughly 25% were unsure of their company's stance. Only 22% reported explicit permission from their employers to use AI tools.
The rise in adoption has stoked concerns about data privacy and potential intellectual property breaches. One notable issue is that conversations with ChatGPT can be accessed by third-party human reviewers, raising valid concerns about data replication and the spread of proprietary knowledge. ChatGPT's rapid ascent has also attracted regulatory scrutiny, particularly in Europe, where its data collection practices have come under the microscope of privacy watchdogs.
The ambiguity extends to corporate policies governing ChatGPT's usage. As companies grapple with this evolving terrain, they must balance AI-driven efficiency against the protection of intellectual assets. Samsung Electronics, notably, recently imposed a global ban on ChatGPT and similar AI tools over concerns about sensitive data exposure, illustrating the cautious approach some corporations are taking.
Other companies are embracing ChatGPT cautiously, weighing its benefits against potential pitfalls. Coca-Cola, for instance, is testing AI's potential to improve operational efficiency while keeping sensitive data within established firewalls. Likewise, Tate & Lyle, a global ingredients manufacturer, is experimenting with ChatGPT in controlled settings to gauge its potential for boosting productivity.
Yet as companies navigate this landscape, the underlying question remains: How will ChatGPT be regulated in the future? Roy Keidar, a partner at the Arnon, Tadmor-Levy law firm and an expert in emerging technologies, shed light on the evolving legal considerations surrounding ChatGPT and similar AI tools.
How will AI like ChatGPT be regulated in the future?
“I'd be surprised to see a complete ban on generative AI tools,” Keidar said, citing cases in which jurisdictions such as Italy have briefly banned ChatGPT over specific privacy violations, but emphasizing that those cases are distinct, driven by clear violations of existing laws.
However, Keidar highlighted the multifaceted legal challenges associated with generative AI. “ChatGPT raises numerous legal issues, not just privacy and data security, but also issues related to liability, IP protection, and hallucinations, [misinformation generated as] a byproduct of using tools like ChatGPT,” he said.
While current regulations and guidelines surrounding ChatGPT center on the commercial relationship between companies and users, Keidar anticipates an expansion into more sector-specific rules and procedures, tailored to different industries' unique requirements.
Amid the evolving legal landscape, one thing seems certain: ChatGPT and similar AI tools will only grow more embedded in corporate environments. Keidar envisions a future where ChatGPT is as accessible as Microsoft Office and other common productivity tools, used across departments for diverse purposes. That integration, however, carries a responsibility to establish secure procedures and guidelines to protect against potential pitfalls.
“The fact that you're going to use it more only further exposes the vulnerability of your personal data, or your or the company's IP or the other services or products you actually sell to your clients,” he said. “We need to understand who's going to be liable for that [vulnerability], who's going to be responsible — and how do we protect our clients' secrets and freedom and IP issues?”
“Generative AI is a great development,” Keidar concluded. “It's probably one of the greatest innovations ever, certainly for the last decade or so, maybe even more. But of course, it comes with a cost — and the cost will be establishing secure procedures, rules, and guidelines for how to use it responsibly.”